We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by special-purpose algorithms designed for such problems. We offer an explanation for this phenomenon based on the concept of class-features variability collapse, which refers to the training dynamics of deep classification networks where the feature embeddings of samples belonging to the same class tend to concentrate around their class means. More specifically, we examine the few-shot error of the learned feature map, which is the classification error of the nearest class-center classifier using centers learned from a small number of random samples from each class. Assuming that the classes appearing in the data are selected independently from a distribution, we show that the few-shot error generalizes from the training data to unseen test data, and we provide an upper bound on the expected few-shot error for new classes (selected from the same distribution) using the average few-shot error for the source classes. Additionally, we show that the few-shot error on the training data can be upper bounded using the degree of class-features variability collapse. This suggests that foundation models can provide feature maps that are transferable to new downstream tasks even with limited data available.
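The few-shot error described above admits a direct Monte-Carlo estimate: repeatedly draw a small number of samples per class, form class centers, and classify the held-out samples by the nearest-center rule. A minimal sketch follows, assuming NumPy and an already-embedded dataset; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def few_shot_error(features_by_class, k, num_trials=100, rng=None):
    """Estimate the few-shot error of a fixed feature map.

    features_by_class: list of (n_c, d) arrays of embedded samples, one per
    class. Per trial, k random samples per class form the class centers; the
    remaining samples are classified by the nearest class-center rule.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    errors = []
    for _ in range(num_trials):
        centers, test_x, test_y = [], [], []
        for label, feats in enumerate(features_by_class):
            idx = rng.permutation(len(feats))
            centers.append(feats[idx[:k]].mean(axis=0))  # k-shot class center
            test_x.append(feats[idx[k:]])
            test_y.append(np.full(len(feats) - k, label))
        centers = np.stack(centers)                      # (C, d)
        test_x = np.concatenate(test_x)                  # (N, d)
        test_y = np.concatenate(test_y)
        # nearest class-center prediction
        dists = ((test_x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        errors.append((dists.argmin(axis=1) != test_y).mean())
    return float(np.mean(errors))
```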
We analyze deep ReLU neural networks trained with mini-batch stochastic gradient descent (SGD) and weight decay. We study the source of the SGD noise and prove that, when training with weight decay, the only solution to which SGD can converge is the zero function. Furthermore, we show, both theoretically and empirically, that when a neural network is trained with SGD using weight decay and a small batch size, the rank of the resulting weight matrices is expected to be small. Our analysis relies on a minimal set of assumptions: the neural networks may be arbitrarily wide or deep, and may include residual connections as well as batch-normalization layers.
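The claimed low-rank bias is easy to probe empirically by counting the singular values of each weight matrix that remain non-negligible after training. A minimal PyTorch sketch, where the tolerance is an illustrative choice rather than anything prescribed by the paper:

```python
import torch

def effective_ranks(model, tol=1e-3):
    """Numerical rank of each 2-D weight matrix: the number of singular
    values exceeding tol times the largest one."""
    ranks = {}
    for name, p in model.named_parameters():
        if p.dim() == 2:
            s = torch.linalg.svdvals(p.detach())  # sorted in descending order
            ranks[name] = int((s > tol * s[0]).sum())
    return ranks

# e.g., after training with a small batch size and weight decay:
#   opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)
# the measured ranks are expected to be small.
```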
Recent results in the literature show that the penultimate-layer representations of neural networks trained for classification exhibit a clustering property called neural collapse (NC). We study the implicit bias of stochastic gradient descent (SGD) when training deep neural networks, which favors low-depth solutions. We characterize a notion of effective depth, which measures the first layer at which sample embeddings become separable using a nearest class-center classifier. Furthermore, we hypothesize and empirically verify that SGD implicitly selects neural networks of small effective depth. Second, while neural collapse emerges even when generalization is impossible, we argue that separability in the intermediate layers is related to generalization. We derive a generalization bound based on comparing the effective depth of the network with the minimal depth required to fit the same dataset with partially corrupted labels. Remarkably, this bound provides non-trivial estimates of test performance. Finally, we show empirically that the effective depth of a trained neural network increases monotonically as the number of randomized labels in the data increases.
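Effective depth, as defined above, can be probed by running the nearest class-center classifier on the embeddings at every layer; a minimal sketch, assuming the per-layer features have already been extracted (the 0.99 separability threshold is an illustrative assumption):

```python
import numpy as np

def nearest_center_accuracy(feats, labels):
    """Accuracy of the nearest class-center classifier on given embeddings."""
    classes = np.unique(labels)
    centers = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return (classes[d.argmin(axis=1)] == labels).mean()

def effective_depth(layer_feats, labels, threshold=0.99):
    """First layer whose embeddings are (near-)separable by the nearest
    class-center rule; len(layer_feats) if no layer qualifies."""
    for depth, feats in enumerate(layer_feats):
        if nearest_center_accuracy(feats, labels) >= threshold:
            return depth
    return len(layer_feats)
```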
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by special-purpose algorithms designed for such problems. In this paper, we offer an explanation for this behavior based on the recently observed phenomenon that the features learned by overparameterized classification networks exhibit an interesting clustering property, called neural collapse. We demonstrate theoretically that neural collapse generalizes to new samples from the training classes and, more importantly, to new classes as well, allowing foundation models to provide feature maps that work well in transfer learning and, specifically, in the few-shot setting.
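Clustering of the neural-collapse kind is commonly quantified by the ratio of within-class to between-class feature variability. The sketch below is a simplified scalar variant of the usual trace-ratio statistics, not the paper's exact measure:

```python
import numpy as np

def variability_collapse(feats, labels):
    """Within-class vs. between-class feature variability: values near
    zero indicate (approximate) neural collapse."""
    mu = feats.mean(axis=0)                    # global feature mean
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        fc = feats[labels == c]
        mc = fc.mean(axis=0)                   # class mean
        within += ((fc - mc) ** 2).sum()
        between += len(fc) * ((mc - mu) ** 2).sum()
    return within / between
```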
Automatic Speech Recognition (ASR) systems frequently use a search-based decoding strategy aiming to find the best attainable transcript by considering multiple candidates. One prominent speech recognition decoding heuristic is beam search, which seeks the transcript with the greatest likelihood computed using the predicted distribution. While showing substantial performance gains in various tasks, beam search loses some of its effectiveness when the predicted probabilities are highly confident, i.e., the predicted distribution is concentrated on a single or very few classes. We show that recently proposed Self-Supervised Learning (SSL)-based ASR models tend to yield exceptionally confident predictions that may hamper beam search from truly considering a diverse set of candidates. We perform a layer analysis to reveal and visualize how predictions evolve, and propose a decoding procedure that improves the performance of fine-tuned ASR models. Our proposed approach does not require further training beyond the original fine-tuning, nor additional model parameters. In fact, we find that our proposed method requires significantly less inference computation than current approaches. We propose aggregating the top M layers, potentially leveraging useful information encoded in intermediate layers, and relaxing model confidence. We demonstrate the effectiveness of our approach by conducting an empirical study on varying amounts of labeled resources and different model sizes, showing consistent improvements in particular when applied to low-resource scenarios.
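One plausible reading of the proposed decoding change is to average the class distributions obtained from the top M layers under a softened softmax before beam search. The sketch below is an assumption-laden illustration: reusing the model's output head on intermediate layers, and the particular values of M and the temperature, are assumptions, not the paper's exact procedure.

```python
import torch

def aggregated_log_probs(hidden_states, output_head, M=3, temperature=1.5):
    """Average the per-frame class distributions predicted from the top M
    layers; a temperature > 1 relaxes over-confident predictions."""
    probs = None
    for h in hidden_states[-M:]:                       # top M layer outputs
        p = torch.softmax(output_head(h) / temperature, dim=-1)
        probs = p if probs is None else probs + p
    return torch.log(probs / M)                        # scores fed to beam search
```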
Training a generative model on a single image has drawn significant attention in recent years. Single image generative methods are designed to learn the internal patch distribution of a single natural image at multiple scales. These models can be used for drawing diverse samples that semantically resemble the training image, as well as for solving many image editing and restoration tasks that involve that particular image. Here, we introduce an extended framework, which makes it possible to simultaneously learn the internal distributions of several images, by using a single model with spatially varying image-identity conditioning. Our BlendGAN opens the door to applications that are not supported by single-image models, including morphing, melding, and structure-texture fusion between two or more arbitrary images.
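Spatially varying image-identity conditioning could, for instance, take the form of a per-pixel one-hot identity map fed to the generator. The sketch below is one hypothetical realization of such a conditioning signal, not BlendGAN's actual mechanism:

```python
import torch

def identity_map(shape, regions):
    """Build a spatial one-hot identity-conditioning tensor; each entry of
    `regions` is (identity, top, left, bottom, right), assigning an image
    identity to a rectangular area."""
    h, w = shape
    n_ids = max(r[0] for r in regions) + 1
    cond = torch.zeros(n_ids, h, w)
    for idx, top, left, bottom, right in regions:
        cond[idx, top:bottom, left:right] = 1.0
    return cond  # e.g., concatenated to the generator's input at each scale

# left half conditioned on image 0, right half on image 1:
cond = identity_map((64, 64), [(0, 0, 0, 64, 32), (1, 0, 32, 64, 64)])
```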
Denoising diffusion models (DDMs) have led to staggering performance leaps in image generation, editing and restoration. However, existing DDMs use very large datasets for training. Here, we introduce a framework for training a DDM on a single image. Our method, which we coin SinDDM, learns the internal statistics of the training image by using a multi-scale diffusion process. To drive the reverse diffusion process, we use a fully-convolutional lightweight denoiser, which is conditioned on both the noise level and the scale. This architecture allows generating samples of arbitrary dimensions, in a coarse-to-fine manner. As we illustrate, SinDDM generates diverse high-quality samples, and is applicable in a wide array of tasks, including style transfer and harmonization. Furthermore, it can be easily guided by external supervision. Particularly, we demonstrate text-guided generation from a single image using a pre-trained CLIP model.
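Conditioning a fully-convolutional denoiser on both the noise level and the scale can be done with feature-wise embeddings added to an early feature map. The sketch below is one plausible scheme under that assumption, not SinDDM's published architecture:

```python
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    """Fully-convolutional denoiser conditioned on noise level and scale
    via additive feature-wise embeddings (one plausible scheme)."""
    def __init__(self, ch=64, num_scales=5):
        super().__init__()
        self.scale_emb = nn.Embedding(num_scales, ch)  # scale index -> vector
        self.noise_emb = nn.Linear(1, ch)              # noise level -> vector
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.GELU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.GELU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x, noise_level, scale):
        h = self.body[0](x)                            # first convolution
        cond = self.noise_emb(noise_level.view(-1, 1)) + self.scale_emb(scale)
        h = h + cond.view(-1, h.shape[1], 1, 1)        # broadcast over space
        for layer in self.body[1:]:
            h = layer(h)
        return h                                       # predicted noise
```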
A master face is a face image that passes face-based identity authentication for a high percentage of the population. These faces can be used to impersonate, with a high probability of success, any user, without having access to any user information. We optimize these faces for 2D and 3D face verification models, by using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator. For 2D face verification, multiple evolutionary strategies are compared, and we propose a novel approach that employs a neural network to direct the search toward promising samples, without adding fitness evaluations. The results we present demonstrate that it is possible to obtain a considerable coverage of the identities in the LFW or RFW datasets with less than 10 master faces, for six leading deep face recognition systems. In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network. When employing two different 3D face recognition systems, we are able to obtain a coverage of 40%-50%. Additionally, we present the generation of paired 2D RGB and 3D master faces, which simultaneously match 2D and 3D models with high impersonation rates.
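At its core, the 2D search can be viewed as an evolutionary loop over latent vectors scored by their impersonation coverage. The sketch below is a generic (mu+lambda)-style strawman under that framing; the paper's specific evolutionary strategies and its network-guided variant are not reproduced here, and `fitness` is an assumed callable:

```python
import numpy as np

def evolve_master_face(fitness, dim=512, pop=64, elite=8, sigma=0.3, steps=200):
    """Evolutionary search in a face generator's latent space. `fitness(z)`
    should return the fraction of a gallery that a face decoded from the
    latent vector z successfully matches."""
    rng = np.random.default_rng(0)
    population = rng.standard_normal((pop, dim))
    for _ in range(steps):
        scores = np.array([fitness(z) for z in population])
        parents = population[np.argsort(scores)[-elite:]]       # keep the best
        children = parents[rng.integers(elite, size=pop - elite)]
        children = children + sigma * rng.standard_normal(children.shape)
        population = np.vstack([parents, children])             # mu + lambda
    return population[np.argmax([fitness(z) for z in population])]
```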
Federated learning (FL) is an emerging paradigm for training machine learning models using private data that may be available on edge devices. The distributed operation of FL gives rise to challenges not encountered in centralized machine learning, including the need to preserve the privacy of the local datasets and the communication load induced by the repeated exchange of updated models. These challenges are often tackled individually via techniques that induce some distortion in the updated models, such as local differential privacy (LDP) mechanisms and lossy compression. In this work, we propose a method coined joint privacy enhancement and quantization (JoPEQ), which jointly implements lossy compression and privacy enhancement in FL settings. In particular, JoPEQ utilizes vector quantization based on random lattices, a universal compression technique whose byproduct distortion is statistically equivalent to additive noise. This distortion is leveraged to enhance privacy by augmenting the model updates with dedicated multivariate privacy-preserving noise. We show that JoPEQ simultaneously quantizes the data at the required bit rate while holding the desired privacy level, without notably affecting the utility of the learned model. This is shown via analytical LDP guarantees, derivations of distortion and convergence bounds, and a numerical study. Finally, we empirically assert that JoPEQ demolishes common attacks known to exploit privacy leakage.
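The key primitive, subtractive dithered lattice quantization, is easy to illustrate in the scalar case, where the end-to-end error is uniform noise independent of the input; a minimal sketch with an illustrative step size follows. JoPEQ builds on this by further augmenting the quantization distortion with dedicated privacy-preserving noise to reach a target LDP level.

```python
import numpy as np

def dithered_quantize(x, step, rng):
    """Subtractive dithered (scalar-lattice) quantization: the end-to-end
    error is uniform on [-step/2, step/2] and independent of x."""
    dither = rng.uniform(-step / 2, step / 2, size=x.shape)
    q = step * np.round((x + dither) / step)   # lattice point that is transmitted
    return q - dither                          # decoder subtracts the shared dither

rng = np.random.default_rng(0)
update = rng.standard_normal(1000)             # a mock model update
recovered = dithered_quantize(update, step=0.5, rng=rng)
residual = recovered - update                  # behaves like additive uniform noise
```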
Recently, a plethora of impossibility results have shown that regret minimization in Markov games with adversarial opponents is statistically and computationally intractable. Nevertheless, none of these results rule out the possibility of regret minimization under the assumption that all parties adopt the same learning procedure. In this work, we present the first (to our knowledge) algorithm for learning in general-sum Markov games that provides sublinear regret guarantees when executed by all agents. The bounds we obtain are for swap regret and thus, along the way, imply convergence to correlated equilibria. Our algorithm is decentralized, computationally efficient, and does not require any communication between agents. Our key observation is that online learning via policy optimization in Markov games essentially reduces to a form of weighted regret minimization, with unknown weights determined by the path lengths of the agents' policy sequences. Consequently, controlling the path lengths leads to weighted regret objectives for which sufficiently adaptive algorithms provide sublinear regret guarantees.
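For reference, swap regret and the weighted-regret objective alluded to above can be written as follows; the notation is assumed for illustration, not taken from the paper.

```latex
% Swap regret of agent i over T episodes: the best gain from any fixed
% rule \phi that swaps each policy the agent played for another policy.
\mathrm{SwapReg}_i(T)
  = \max_{\phi : \Pi \to \Pi} \sum_{t=1}^{T}
    \left( V_i^{\phi(\pi_i^t),\, \pi_{-i}^t} - V_i^{\pi_i^t,\, \pi_{-i}^t} \right).

% Weighted regret with weights w_t (here, determined by the path
% lengths of the agents' policy sequences):
\mathrm{Reg}_i^{w}(T)
  = \max_{\pi \in \Pi} \sum_{t=1}^{T} w_t
    \left( V_i^{\pi,\, \pi_{-i}^t} - V_i^{\pi_i^t,\, \pi_{-i}^t} \right).
```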